Decision by sampling
We present a theory of decision by sampling (DbS) in which, in contrast with traditional
models, there are no underlying psychoeconomic scales. Instead, we assume that an
attribute's subjective value is constructed from a series of binary, ordinal comparisons to a
sample of attribute values drawn from memory and is its rank within the sample. We assume
that the sample reflects both the immediate distribution of attribute values from the current
decision's context and also the background, real-world distribution of attribute values. DbS
accounts for concave utility functions; losses looming larger than gains; hyperbolic temporal
discounting; and the overestimation of small probabilities and the underestimation of large
probabilities
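The rank-based valuation at the heart of DbS can be sketched in a few lines. In this minimal illustration, the function name and the exponential background distribution of monetary gains are assumptions for the demo, not taken from the paper:

```python
import random

def dbs_subjective_value(target, memory_sample):
    """Subjective value of `target` as its relative rank: the proportion of
    binary, ordinal comparisons against the sampled attribute values it wins."""
    wins = sum(1 for v in memory_sample if target > v)
    return wins / len(memory_sample)

# Assumed skewed background distribution of gains (small amounts common,
# large amounts rare), standing in for the real-world distribution DbS posits.
random.seed(0)
sample = [random.expovariate(1 / 50) for _ in range(10_000)]
for amount in (10, 20, 40, 80, 160):
    print(amount, round(dbs_subjective_value(amount, sample), 3))
```

Because small amounts dominate the assumed background sample, each doubling of the amount raises the rank by less and less, reproducing a concave utility-like function without any underlying psychoeconomic scale.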
Prospect relativity: How choice options influence decision under risk
In many theories of decision under risk (e.g., expected utility theory, rank-dependent utility theory, and prospect theory), the utility of a prospect is independent of other options in the choice set. The experiments presented here show a large effect of the available options, suggesting instead that prospects are valued relative to one another. The judged certainty equivalent for a prospect is strongly influenced by the options available. Similarly, the selection of a preferred prospect is strongly influenced by the prospects available. Alternative theories of decision under risk (e.g., the stochastic difference model, multialternative decision field theory, and range frequency theory), where prospects are valued relative to one another, can provide an account of these context effects.
The Bayesian sampler: Generic Bayesian inference causes incoherence in human probability judgments
Human probability judgments are systematically biased, in apparent tension with Bayesian models of cognition. But perhaps the brain does not represent probabilities explicitly, but approximates probabilistic calculations through a process of sampling, as used in computational probabilistic models in statistics. Naïve probability estimates can be obtained by calculating the relative frequency of an event within a sample, but these estimates tend to be extreme when the sample size is small. We propose instead that people use a generic prior to improve the accuracy of their probability estimates based on samples, and we call this model the Bayesian sampler. The Bayesian sampler trades off the coherence of probabilistic judgments for improved accuracy, and provides a single framework for explaining phenomena associated with diverse biases and heuristics such as conservatism and the conjunction fallacy. The approach turns out to provide a rational reinterpretation of “noise” in an important recent model of probability judgment, the probability theory plus noise model (Costello & Watts, 2014, 2016a, 2017; Costello & Watts, 2019; Costello, Watts, & Fisher, 2018), making equivalent average predictions for simple events, conjunctions, and disjunctions. The Bayesian sampler does, however, make distinct predictions for conditional probabilities and distributions of probability estimates. We show in 2 new experiments that this model better captures these mean judgments both qualitatively and quantitatively; which model best fits individual distributions of responses depends on the assumed size of the cognitive sample
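The regularisation step the abstract describes can be sketched as a toy simulation. It assumes a symmetric Beta(β, β) prior applied to a small mental sample; the parameter values and function name are illustrative, not fitted values from the paper:

```python
import random

def bayesian_sampler_estimate(p_true, n_samples, beta, rng):
    """One probability judgment under the Bayesian sampler: draw a small
    mental sample, then regularise the relative frequency of the event
    with a symmetric Beta(beta, beta) prior."""
    k = sum(rng.random() < p_true for _ in range(n_samples))
    return (k + beta) / (n_samples + 2 * beta)

rng = random.Random(1)
for p in (0.05, 0.25, 0.75, 0.95):
    mean_est = sum(bayesian_sampler_estimate(p, 5, 1.0, rng)
                   for _ in range(20_000)) / 20_000
    print(p, round(mean_est, 3))
```

With N samples the mean judgment is (Np + β)/(N + 2β), so estimates are pulled toward 0.5: small probabilities are overestimated and large ones underestimated, the conservatism pattern the abstract cites, traded against reduced variance from the small sample.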
Bayesian brains without probabilities
Bayesian explanations have swept through cognitive science over the past two decades, from intuitive physics and causal learning, to perception, motor control and language. Yet people flounder with even the simplest probability questions. What explains this apparent paradox? How can a supposedly Bayesian brain reason so poorly with probabilities? In this paper, we propose a direct and perhaps unexpected answer: that Bayesian brains need not represent or calculate probabilities at all and are, indeed, poorly adapted to do so. Instead, the brain is a Bayesian sampler. Only with infinite samples does a Bayesian sampler conform to the laws of probability; with finite samples it systematically generates classic probabilistic reasoning errors, including the unpacking effect, base-rate neglect, and the conjunction fallacy
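One way to see how finite samples alone can generate a conjunction fallacy: if each probability query is answered from its own small mental sample, sampling error will sometimes rank a conjunction above its constituent event, even though the true probabilities cannot be ordered that way. A minimal sketch, in which the probabilities, sample size, and helper name are all assumptions for illustration:

```python
import random

def freq_estimate(event_prob, n, rng):
    """Relative-frequency judgment from a finite mental sample of size n."""
    return sum(rng.random() < event_prob for _ in range(n)) / n

rng = random.Random(2)
p_a, p_ab = 0.6, 0.4   # P(A) and P(A and B): the conjunction is truly rarer
n = 10                 # small mental sample per query
fallacy = sum(freq_estimate(p_ab, n, rng) > freq_estimate(p_a, n, rng)
              for _ in range(10_000)) / 10_000
print(fallacy)
```

A sizeable fraction of simulated judgments rank the conjunction above the single event; as the sample size grows the fraction shrinks toward zero, matching the claim that only with infinite samples does the sampler conform to the laws of probability.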
The sampling brain
Alday, Schlesewsky, and Bornkessel-Schlesewsky [1] provide a stimulating commentary on the issues discussed in our paper [2], highlighting important connections between sampling, Bayesian inference, neural networks, free energy, and basins of attraction. We trace here some relevant history of computational theories of the brain
Recommended from our members
Depth sensitive sampling of implanted species in Genesis Collectors using UV laser ablation and SIMS
SIMS profiling of laser ablation pits in CVD diamond implanted with oxygen-18 shows that a homogenised 193 nm excimer laser beam can successfully ablate a layer a few nm thick, removing surface contamination without significant loss of the implanted sample
Identification of probabilities
Within psychology, neuroscience and artificial intelligence, there has been increasing interest in the proposal that the brain builds probabilistic models of sensory and linguistic input: that is, to infer a probabilistic model from a sample. The practical problems of such inference are substantial: the brain has limited data and restricted computational resources. But there is a more fundamental question: is the problem of inferring a probabilistic model from a sample possible even in principle? We explore this question and find some surprisingly positive and general results. First, for a broad class of probability distributions characterized by computability restrictions, we specify a learning algorithm that will almost surely identify a probability distribution in the limit given a finite i.i.d. sample of sufficient but unknown length. This is similarly shown to hold for sequences generated by a broad class of Markov chains, subject to computability assumptions. The technical tool is the strong law of large numbers. Second, for a large class of dependent sequences, we specify an algorithm which identifies in the limit a computable measure for which the sequence is typical, in the sense of Martin-Löf (there may be more than one such measure). The technical tool is the theory of Kolmogorov complexity. We analyze the associated predictions in both cases. We also briefly consider special cases, including language learning, and wider theoretical implications for psychology
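The first result can be illustrated with a toy version of identification in the limit. The sketch below assumes a small, finite candidate set of Bernoulli biases, whereas the paper's result covers a much broader computable class; the mechanism (conjecturing the candidate closest to the running empirical frequency, which converges by the strong law of large numbers) is the same:

```python
import random

# Toy SLLN-based identification: the true coin bias lies in a known
# (computable) candidate set; the learner conjectures the candidate closest
# to the running empirical frequency. By the strong law of large numbers
# the conjecture is almost surely eventually correct and stays correct.
candidates = [0.1, 0.3, 0.5, 0.7, 0.9]
true_p = 0.7

rng = random.Random(3)
heads = 0
for t in range(1, 5001):
    heads += rng.random() < true_p
    guess = min(candidates, key=lambda q: abs(q - heads / t))
print(guess)
```

After enough i.i.d. flips, the empirical frequency lands within half the gap between neighbouring candidates, so the learner's output settles on the true bias; the "sufficient but unknown length" in the abstract corresponds to not knowing in advance when that settling happens.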
Who uses foodbanks and why? Exploring the impact of financial strain and adverse life events on food insecurity
Background
Rising use of foodbanks highlights food insecurity in the UK. Adverse life events (e.g. unemployment, benefit delays or sanctions) and financial strains are thought to be the drivers of foodbank use. This research aimed to explore who uses foodbanks, and factors associated with increased food insecurity.
Methods
We surveyed those seeking help from front line crisis providers from foodbanks (N = 270) and a comparison group from Advice Centres (ACs) (N = 245) in relation to demographics, adverse life events, financial strain and household food security.
Results
Overall, 55.9% of foodbank users were women and the majority (64.8%) were in receipt of benefits. Benefit delays (31.9%), benefit changes (11.1%) and low income (19.6%) were the most common reasons given for referral. Compared to AC users, foodbank users were more likely to be single men without children, unemployed or currently homeless, and they experienced more financial strain and adverse life events (P = 0.001). Food insecurity was high in both populations, and more severe among those who also reported financial strain and adverse life events.
Conclusions
Benefit-related problems appear to be a key reason for foodbank referral. Compared with other disadvantaged groups, foodbank users experienced more financial strain and adverse life events, both of which increased the severity of food insecurity
Different Multilayer Substrate Approaches to Improve Array Antenna Characteristics for Radar Applications
The aim of this paper is to investigate in depth the multi-layer substrate technique as a way of improving the characteristics of a patch array antenna for electronic-scanning radar applications. The basic array antenna consists of 8 patches mounted on an FR-4 substrate, operating at 3 GHz and fed using microstrip technology. This structure has some disadvantages, such as poor gain and narrow bandwidth. In fact, the obtained gain does not exceed 7 dB, which can be explained by the lossy nature of the FR-4 substrate, while the narrow bandwidth is caused by the limitations of the microstrip feed. For this reason, the multi-layer substrate technique is proposed in this paper. Several approaches are investigated and the distance between the layers is studied. The design and simulation of each approach are performed with Keysight's Advanced Design System tool. A comparison between the simulation results of all approaches, including those of the basic array antenna, is analyzed
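For orientation, first-pass dimensions of a single 3 GHz patch on FR-4 can be estimated with the standard transmission-line design equations. In the sketch below, εr = 4.4 and h = 1.6 mm are assumed typical FR-4 values, not figures taken from the paper:

```python
import math

# Standard transmission-line-model design equations for a rectangular
# microstrip patch (textbook formulas, not the authors' design).
c = 3e8          # speed of light, m/s
f = 3e9          # operating frequency, Hz
er = 4.4         # FR-4 relative permittivity (assumed)
h = 1.6e-3       # substrate height, m (assumed)

W = c / (2 * f) * math.sqrt(2 / (er + 1))                  # patch width
e_eff = (er + 1) / 2 + (er - 1) / 2 / math.sqrt(1 + 12 * h / W)
dL = 0.412 * h * ((e_eff + 0.3) * (W / h + 0.264)
                  / ((e_eff - 0.258) * (W / h + 0.8)))     # fringing extension
L = c / (2 * f * math.sqrt(e_eff)) - 2 * dL                # patch length

print(f"W = {W * 1000:.1f} mm, L = {L * 1000:.1f} mm")
```

The high permittivity and loss tangent of FR-4 that keep the patch small are the same properties blamed in the abstract for the sub-7 dB gain, which is what motivates the multi-layer substrate study.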